Qianming Blog
HOME
ARCHIVES
CATEGORIES
CV
ClassNote_CIS5190
Qianming Huang
Three
2025-10-08 19:43:51
2025-10-09 01:47:41
Artificial Intelligence
Machine Learning
1. ML Design Choices
2. Linear Regression
2.1. Linear Functions
2.2. Loss Function
2.2.1. Mean Squared Error (MSE)
2.3. Linear Regression Problem
2.4. Linear Regression With Feature Maps
2.4.1. Feature Maps
2.4.2. Algorithm for Non-linear Regression
2.5. Training/Test Split
2.6. Bias and Variance: A Way to Assess Overfitting and Underfitting
2.6.1. Effective Bias/Variance Under Varying Dataset Sizes
2.7. Regularization: A Way to Fix Underfitting and Overfitting
2.7.1. Intuition on L2 Regularization
2.8. Cross-Validation for Hyperparameter Tuning / Model Selection
2.8.1. Hyperparameter Tuning
2.8.2. Validation Set
2.8.2.1. Cross Validation
2.9. Other Performance Metrics
2.10. Optimizing the MSE Loss
2.10.1. Closed-Form Solution
2.10.2. Gradient Descent
2.10.2.1. Batch Gradient Descent & Stochastic Gradient Descent
2.10.3. Optimizing Regularized Linear Regression
2.10.3.1. L2-Regularized Linear Regression
2.11. Feature Preprocessing and Selection
2.11.1. Feature Standardization
2.11.2. Feature Map
2.11.3. Feature Selection
2.11.4. Handling Missing Values
2.12. L1 Regularization (LASSO) and Automatic Feature Selection
3. Logistic Regression
3.1. Linear Regression to Binary Classification
4. Neural Networks
4.1. A Framework Uniting Many ML Hypothesis Classes
4.2. Neural Net Building Blocks and Notation
4.2.1. Basic Structure
4.3. Forward Propagation
4.4. Backward Pass
4.4.1. Loss Functions
4.4.2. Optimizer
4.4.3. Back Propagation
4.4.4. Computational Graph
4.4.4.1. Computing Rules for Each Node
4.5. The Neural Net Toolbox
4.5.1. Optimization
4.5.1.1. Mini-Batch SGD
4.5.1.2. Momentum Gradient Descent
4.5.1.3. Adaptive Learning Rates
4.5.1.4. Setting the Learning Rate
4.5.2. Activation Functions
4.5.3. Managing Weights
4.5.3.1. Initialization
4.5.3.2. Batch Normalization
4.5.3.3. Regularization
4.5.3.4. Dropout
4.5.4. Managing Training
4.5.5. Data Augmentation
4.5.6. Hyperparameter Optimization
4.5.6.1. Coarse-to-Fine Search
4.5.7. Cross-Validation vs. a Single Validation Set
4.6. Practical Tips for Training Neural Nets
5. K-Nearest Neighbors
5.1. The Definition of “Nearest”
5.2. “Non-parametric” Machine Learning Approaches
5.3. Hyperparameters in KNN